
Hyper Traxx
Uncontrollable Innovations
Posted - 2013.08.27 02:08:00 -
[1]
Whitehound wrote: The way an A/D converter works requires it to hold the signal for a certain time until it has been digitized. There are many implementations for A/D converters, but a popular one is a successive approximation of the input signal, using a comparator to compare the input to the output of a D/A converter until all bits have been matched. In order to achieve this, the input signal is stored in a capacitor. The method is nothing short of brutal to anyone familiar with analogue electronics. Even capacitors have a characteristic.
As one who has not only studied how A/D conversion takes place but also instructed on the various A/D conversion processes, I couldn't help but notice you are explaining the A/D conversion process in the most convoluted manner possible. Sure, you chose a successive approximation (SAR) converter for your example, and SAR conversion takes place fairly quickly, which would make it seem the best choice - but you then completely botch how it actually works. The input voltage is *sampled* by an audio-grade A/D converter at a precise interval (e.g. 44.1 kHz). The 'sampling' functions much more like a latch, in that it captures the voltage present at the input at that instant - *far* shorter than the 22.68 us you claim the audio is 'held' for. Your inference that this somehow changes the input is flat-out false.
The (audio) analog input to the SAR first enters an operational amplifier - a device specifically designed *not* to alter the signal, especially in the case of an audio-grade component. Any capacitance used to store the charge during conversion sits on the other side of that op-amp and has a negligible impact on the analog source. The capacitance does nothing 'brutal' to the source at all, nor does it change in any measurable way during the successive-approximation process - if it did, the SAR would fail at its task and randomly output grossly miscalculated samples (which would be *quite* audible as a 'tick' when playing back the conversion).
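For anyone following along, the bit-at-a-time comparison described above is simple enough to sketch in a few lines of Python. This is an idealized model (perfect comparator, perfect internal DAC), not any real ADC's interface, and the function name and parameters are made up for the illustration:

```python
# Minimal sketch of successive-approximation (SAR) conversion, assuming an
# ideal comparator and internal DAC. Names here are illustrative only.

def sar_convert(v_in, v_ref=1.0, bits=16):
    """Approximate v_in (range 0..v_ref) one bit at a time, MSB first."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)               # tentatively set this bit
        v_dac = v_ref * trial / (1 << bits)     # internal D/A output for trial
        if v_in >= v_dac:                       # comparator decision
            code = trial                        # keep the bit
    return code

# The held sample stays constant for all 16 comparisons; the analog source,
# buffered behind the op-amp, never sees any of this.
print(sar_convert(0.5))   # mid-scale input -> 32768
```

Note that the input only has to stay stable on the *hold* capacitor for the duration of the loop - which is exactly why a changing or 'brutalized' hold voltage would show up as wildly wrong output codes.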
...now your argument holds some water if you look at what ends up stored, which is a waveform shifting in 22.68 us 'steps'. Yup, you're absolutely correct there - so long as all you're doing is zooming way in on a digital recording. The catch is that any decent D/A conversion process involves oversampling and reconstruction filtering, which effectively removes the jagged steps from the raw digital source as it is converted back into analog form. This is *in addition to* the fact that the audio signal satisfied the Nyquist criterion in the first place (the sample rate is >2x the highest frequency to be sampled). If you could freeze-frame an (analog) oscilloscope trace taken straight from the recording microphone, and compare it to a digital signal sampled from the same source, you would be hard pressed to identify any difference. If you zoomed in far enough to see the jaggies, that is not representative of what actually comes out of the D/A conversion. Passing the same signal, post-A/D/A conversion, back through the same analog scope would reveal a virtually identical trace, regardless of how closely you decide to look. Further, since the sample rate was >2x the highest audible frequency in the first place, any smoothing from the reconstruction filter is inaudible, as it occurs above the specified (audible) range.
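The 'steps disappear on reconstruction' point can be demonstrated numerically. The sketch below uses ideal Whittaker-Shannon (sinc) interpolation - the textbook idealization of what a D/A reconstruction filter approximates - on a 1 kHz tone sampled at 44.1 kHz; the tone frequency and sample count are arbitrary choices for the demo:

```python
import math

# Sketch: recover a band-limited signal between its sample instants via ideal
# sinc interpolation, the idealized form of D/A reconstruction filtering.

fs = 44100.0                 # sample rate (Hz)
f = 1000.0                   # 1 kHz test tone, well below Nyquist (22.05 kHz)
N = 2000                     # number of samples in the demo

samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

def reconstruct(t):
    """Sinc-interpolate the sample train at continuous time t (seconds)."""
    total = 0.0
    for n in range(N):
        x = fs * t - n
        total += samples[n] * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
    return total

# Evaluate exactly halfway between two samples: no 'stair step', just the
# original sine to within the truncation error of this finite demo.
t = 500.5 / fs
print(abs(reconstruct(t) - math.sin(2 * math.pi * f * t)))  # very small error
```

Zoomed in on the raw samples you see steps; run through the interpolator, the in-between values land back on the original smooth waveform, which is the whole point of the reconstruction stage.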
Some people can in fact perceive audio above the ~22 kHz Nyquist limit of 44.1 kHz sampling, but moving to either the 48 or 96 kHz format evens out the field rather quickly.
You've cited your comparisons of the 'same' recordings on both vinyl and CD formats. I assure you they are most likely *not* the same. The vinyl master used for pressing may be the very one used for sampling the digital master, but it was then more than likely altered, even slightly, by the audio engineers prior to creating the CD master. The most common culprit is compression (of dynamic range, not MP3). Early classical CD recordings had to be turned up fairly high in volume for the quiet parts to be heard in non-ideal environments. Studios very quickly caved to negative feedback and dialed in at least some dynamic range compression. If you've ever toyed with it yourself, you'll see just how quickly that type of compression kills off the 'warmth' of a source. Further, some studios make the mistake of converting to digital at a higher sample rate (e.g. 96 or 192 kHz) and then using sample-rate conversion to drop that down to 44.1 kHz. Sampling at 96 kHz is great so long as you also listen to it at that rate. Down-converting to 44.1 kHz (not an integer ratio of 96) introduces artifacts that some can perceive.
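To make the dynamic-range point concrete, here is a toy static compressor curve - downward compression above a threshold, which is the mastering-style compression meant above, not data compression. The threshold and ratio values are invented for the example:

```python
# Toy illustration of downward dynamic-range compression: above a threshold,
# the output level rises only 1/ratio as fast as the input. Parameter values
# are made up for the demo, not taken from any real mastering chain.

def compress(sample, threshold=0.5, ratio=4.0):
    """Apply a static compressor curve to one sample in the range -1..1."""
    sign = 1.0 if sample >= 0 else -1.0
    level = abs(sample)
    if level <= threshold:
        return sample                            # quiet material passes through
    return sign * (threshold + (level - threshold) / ratio)

# Loud peaks are pulled down while quiet passages are untouched, shrinking
# the gap between soft and loud - the loss of 'warmth' described above.
print(compress(0.2))   # -> 0.2 (below threshold, unchanged)
print(compress(1.0))   # -> 0.625 (full-scale peak squashed)
```

Even this crude version shows the effect: the contrast between a pianissimo passage at 0.2 and a fortissimo peak at 1.0 drops from 5:1 to about 3:1.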
Your comparisons to photography are null and void. We are nowhere near resolution 'mastery' of digital imagery, but for audio, the technology has been more than enough to appropriately oversample for decades. It will still be a while before we can equivalently 'oversample' a photographed scene with a simple digital camera.